# 03a Further understanding: Equivalence relations.
We have encountered the notion of **row equivalence**, where we write $$
A \stackrel{\text{row}}\sim B
$$
if $A$ can be brought to $B$ via a sequence of elementary row operations. We read $A \stackrel{\text{row}}\sim B$ aloud as "$A$ is row equivalent to $B$".
And to denote "$A$ is **not** row equivalent to $B$", you can write $A \stackrel{\text{row}}{\not\sim} B$ or $A\ \cancel{\stackrel{\text{row}}\sim} B$; just make the strike-through line clearly visible.
We make a few, perhaps, "obvious" observations here:
- (1) For any matrix $A$, we have $A \stackrel{\text{row}}\sim A$.
    - Indeed, if we just scale a row by $1$, we get the original matrix back.
- (2) For any two matrices $A$ and $B$, if $A \stackrel{\text{row}}\sim B$, then $B \stackrel{\text{row}}\sim A$.
    - Indeed, since elementary row operations are all reversible, if we can bring $A$ to $B$ via a sequence of elementary row operations, then we can bring $B$ back to $A$ by applying the corresponding inverse operations in reverse order.
- (3) For three matrices $A,B,C$, if $A \stackrel{\text{row}}\sim B$ and $B \stackrel{\text{row}}\sim C$, then we have $A \stackrel{\text{row}}\sim C$.
    - Indeed, if we can bring $A$ to $B$ via a sequence of elementary row operations, and $B$ to $C$ via another such sequence, then we can certainly bring $A$ to $C$ via a sequence of elementary row operations -- we just continue the operations from $B$!
Let us use some fancy words to describe these properties of $\stackrel{\text{row}}\sim$ on matrices:
- Since every matrix is row equivalent to itself, we say $\stackrel{\text{row}}\sim$ is **reflexive**.
- Since $A \stackrel{\text{row}}\sim B$ implies $B \stackrel{\text{row}}\sim A$, we say $\stackrel{\text{row}}\sim$ is **symmetric**.
- And since $A \stackrel{\text{row}}\sim B$ and $B \stackrel{\text{row}}\sim C$ together imply $A \stackrel{\text{row}}\sim C$, we say $\stackrel{\text{row}}\sim$ is **transitive**.
So $\stackrel{\text{row}}\sim$ is **reflexive**, **symmetric**, and **transitive**.
We now make some definitions.
We say $\sim$ is a **binary relation** on a collection of objects $X$ if for any two objects $a,b$ in $X$, either $a\sim b$ is true, or $a\sim b$ is false. In the case where $a\sim b$ is false, we write $a\not\sim b$. Essentially, $\sim$ is a function that takes an ordered pair $(a,b)$ from $X \times X$ and assigns to it a value of true or false.
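We can model this definition concretely. Here is a small sketch (the set $X$ and the "divides" relation are illustrative choices, not from the text): a binary relation is just a function taking two objects and returning true or false.

```python
# A binary relation on a collection X, modeled as a function X x X -> bool.
# Here X and the relation "a divides b" are illustrative choices.
X = [1, 2, 3, 4, 6]

def divides(a, b):
    # a ~ b  iff  a divides b (with no remainder)
    return b % a == 0

# For every ordered pair (a, b), the relation is either true or false:
print(divides(2, 6))  # -> True
print(divides(4, 6))  # -> False
```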
And if a binary relation $\sim$ on a collection of objects $X$ is
- (1) reflexive, namely for any $a\in X$, we have $a\sim a$;
- (2) symmetric, namely for any $a,b \in X$, $a \sim b$ implies $b\sim a$; and
- (3) transitive, namely for any $a,b,c\in X$, if $a\sim b$ and $b\sim c$ then $a\sim c$,
then we say $\sim$ is an **equivalence relation** on this collection of objects.
So $\stackrel{\text{row}}\sim$ is an **equivalence relation** on matrices!
Before we think about why we might care, let us think about some relations that we know, and think about whether they are equivalence relations or not. Do you think $=$ on the real numbers is an equivalence relation? How about $<$ on the real numbers? What about the notion of geometric congruence of geometric figures on the plane?
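To explore these questions on a small finite set, one can check the three properties by brute force. The sketch below (the test set is an illustrative choice) confirms that $=$ is an equivalence relation while $<$ is not, since $<$ fails reflexivity (and symmetry).

```python
from itertools import product

def is_equivalence(X, rel):
    """Check reflexivity, symmetry, and transitivity of rel on a finite X."""
    reflexive = all(rel(a, a) for a in X)
    symmetric = all(rel(b, a) for a, b in product(X, X) if rel(a, b))
    transitive = all(rel(a, c)
                     for a, b, c in product(X, X, X)
                     if rel(a, b) and rel(b, c))
    return reflexive and symmetric and transitive

X = [-2, -1, 0, 1, 2]
print(is_equivalence(X, lambda a, b: a == b))  # equality -> True
print(is_equivalence(X, lambda a, b: a < b))   # strict order -> False
```

Of course, a brute-force check only works on finite collections; for $=$ and $<$ on all real numbers, the three properties must be argued directly from the definitions.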
Ok, so what? As it turns out, if we have an equivalence relation $\sim$ on a collection of objects $X$, it **partitions (chops up) this collection** into disjoint **equivalence classes**. Disjoint means that no two distinct classes share a member. And each class consists of members that are all equivalent to each other! (Recall the mafia family analogy...)
If $\sim$ is an equivalence relation on a collection $X$, we denote $[a] = \{x\in X : x\sim a\}$ to be the **equivalence class** of $a$, that is, all the things in $X$ that are equivalent to $a$.
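This definition translates directly into code. As a sketch (congruence mod $3$ on $\{0,\dots,8\}$ is an illustrative choice of equivalence relation, not from the text), we can compute $[a]$ and watch the distinct classes partition the whole set:

```python
def equivalence_class(X, rel, a):
    # [a] = {x in X : x ~ a}
    return {x for x in X if rel(x, a)}

# Illustrative example: congruence mod 3 on the integers 0..8.
X = range(9)
same_mod_3 = lambda x, y: x % 3 == y % 3

print(sorted(equivalence_class(X, same_mod_3, 4)))  # -> [1, 4, 7]

# The distinct classes are disjoint and cover all of X:
classes = {frozenset(equivalence_class(X, same_mod_3, a)) for a in X}
print(sorted(sorted(c) for c in classes))  # -> [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```

Note that $[1] = [4] = [7]$: any member of a class serves equally well as its representative, which is why collecting the classes into a set above leaves only three of them.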
So for our situation of row equivalence $\stackrel{\text{row}}\sim$ on (real, say) matrices, since it is an equivalence relation on matrices, it partitions the collection of matrices into equivalence classes, where the matrices within each class can be brought to one another via a sequence of elementary row operations. And matrices from different row equivalence classes can never be brought to each other via a sequence of elementary row operations.
With row equivalence $\stackrel{\text{row}}\sim$, recall we have a nice way of determining whether $A\stackrel{\text{row}}\sim B$: as it turns out, $A\stackrel{\text{row}}\sim B$ if and only if $\text{RREF}(A) =\text{RREF}(B)$.
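We can turn this criterion into a computation. Below is a minimal sketch of row reduction (not an official algorithm from the text) using exact `Fraction` arithmetic to avoid floating-point issues; the example matrices are illustrative, with $B$ obtained from $A$ by scaling row 1 by $2$ and adding row 1 to row 2.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce a matrix (given as a list of rows) to reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]               # swap rows
        M[pivot_row] = [x / M[pivot_row][col] for x in M[pivot_row]]  # scale pivot to 1
        for r in range(rows):                                         # clear the column
            if r != pivot_row and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

A = [[1, 2, 3], [4, 5, 6]]
B = [[2, 4, 6], [5, 7, 9]]            # obtained from A by row operations
print(rref(A) == rref(B))             # -> True, so A ~row~ B

C = [[1, 2, 3], [2, 4, 6]]            # rank 1, so not row equivalent to A
print(rref(A) == rref(C))             # -> False
```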
And what does this mean in terms of solving systems of linear equations? Well, row operations preserve solution sets, so if a linear system $(LS)$ has an augmented matrix that is row equivalent to the augmented matrix of another linear system $(LS')$, then the two systems have the same solution set (provided they use the same variable names).
The "converse" is not true, but only for a somewhat silly reason: if two matrices are row equivalent, then they need to be of the same size. However, we can always add trivial equations such as $0=0$ to a system, giving a new system with the same solution set but a differently sized augmented matrix.
However, if we impose this size condition, then whenever two linear systems $(LS),(LS')$ have the same solution set **and** the same number of equations and variables, their augmented matrices must be row equivalent to each other. (Think about why that is!)